Convex Optimization-based Policy Adaptation to Compensate for Distributional Shifts
Many real-world systems involve physical components or operating
environments with highly nonlinear and uncertain dynamics. A number of
different control algorithms can be used to design optimal controllers for such
systems, assuming a reasonably high-fidelity model of the actual system.
However, the assumptions made on the stochastic dynamics of the model when
designing the optimal controller may no longer be valid when the system is
deployed in the real world. The problem addressed by this paper is the
following: given an optimal trajectory obtained by solving a control problem
in the training environment, how do we ensure that the real-world system
trajectory tracks this optimal trajectory with minimal error in the
deployment environment? In other words, we want to learn how to adapt an
optimal trained policy to distribution shifts in the environment. Distribution
shifts are problematic in safety-critical systems, where a trained policy may
lead to unsafe outcomes during deployment. We show that this problem can be
cast as a nonlinear optimization problem that can be solved using heuristic
methods such as particle swarm optimization (PSO). However, if we instead
consider a convex relaxation of this problem, we can learn policies that track
the optimal trajectory with substantially lower error and faster computation
times. We demonstrate the efficacy of our approach on tracking an optimal path
using a Dubins car model, and on collision avoidance using both a linear and a
nonlinear model for adaptive cruise control.
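The convex relaxation suggests a tracking formulation like the quadratic program sketched below, assuming linearized dynamics x_{t+1} = A x_t + B u_t and the cvxpy solver; the matrices, horizon, bounds, and reference trajectory are hypothetical placeholders, not the paper's actual model.

```python
# Minimal sketch: trajectory tracking as a convex quadratic program (cvxpy).
# A, B, x0, x_ref, and the input bound are hypothetical placeholders.
import numpy as np
import cvxpy as cp

T, n, m = 20, 2, 1                      # horizon, state dim, input dim
A = np.array([[1.0, 0.1], [0.0, 1.0]])  # assumed linearized dynamics
B = np.array([[0.0], [0.1]])
x0 = np.zeros(n)
x_ref = np.linspace(0, 1, T + 1)[:, None] * np.ones((1, n))  # toy reference

x = cp.Variable((T + 1, n))
u = cp.Variable((T, m))
cost = cp.sum_squares(x - x_ref)        # tracking error over the horizon
constraints = [x[0] == x0]
for t in range(T):
    constraints += [x[t + 1] == A @ x[t] + B @ u[t],   # dynamics
                    cp.norm(u[t], "inf") <= 1.0]       # input bounds
cp.Problem(cp.Minimize(cost), constraints).solve()
print("tracking cost:", cost.value)
```

Because the relaxed problem is a QP, an off-the-shelf solver returns a global optimum quickly, which is presumably where the speed and accuracy advantage over PSO-style heuristic search would come from.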
Conformance Testing as Falsification for Cyber-Physical Systems
In Model-Based Design of Cyber-Physical Systems (CPS), it is often desirable
to develop several models of varying fidelity. Models of different fidelity
levels can enable mathematical analysis of the model, control synthesis, faster
simulation, etc. Furthermore, when (automatically or manually) transitioning
from a model to its implementation on an actual computational platform, two
different versions of the same system are again being developed. In all of
these cases, it is necessary to define a rigorous notion of conformance
between different models and between models and their implementations. This
paper argues that conformance should be a measure of distance between systems.
Although a range of theoretical distance notions exists, a way to compute such
distances for industrial-size systems and models has not yet been proposed.
This paper addresses exactly this problem. A universal notion of conformance as
closeness between systems is rigorously defined, and evidence is presented that
this implies a number of other application-dependent conformance notions. An
algorithm for detecting that two systems are not conformant is then proposed,
which builds on existing, proven tools. A method is also proposed to measure the
degree of conformance between two systems. The results are demonstrated on a
range of models.
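As a rough illustration of measuring a degree of conformance, the sketch below estimates the closeness of two models as the worst-case sup-norm trajectory distance over randomly sampled shared inputs; the two toy models and the random search (standing in for the falsification tools the paper leverages) are illustrative assumptions.

```python
# Sketch: degree of conformance as worst-case trajectory distance over inputs.
# The toy models (a nonlinear system and its linearization) are hypothetical;
# random search stands in for a falsification tool.
import numpy as np

def simulate(step, x0, inputs):
    xs = [x0]
    for u in inputs:
        xs.append(step(xs[-1], u))
    return np.array(xs)

def high_fidelity(x, u):   # "detailed" model: nonlinear term
    return x + 0.1 * (-np.sin(x) + u)

def low_fidelity(x, u):    # simplified model: linearized dynamics
    return x + 0.1 * (-x + u)

rng = np.random.default_rng(0)
worst = 0.0
for _ in range(1000):                     # random search over input signals
    inputs = rng.uniform(-1, 1, size=50)
    t1 = simulate(high_fidelity, 0.0, inputs)
    t2 = simulate(low_fidelity, 0.0, inputs)
    worst = max(worst, np.max(np.abs(t1 - t2)))  # sup-norm distance
print("estimated conformance distance:", worst)
```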
Data-Driven Reachability Analysis of Stochastic Dynamical Systems with Conformal Inference
We consider data-driven reachability analysis of discrete-time stochastic
dynamical systems using conformal inference. We assume that we are not provided
with a symbolic representation of the stochastic system, but instead have
access to a dataset of k-step trajectories. The reachability problem is to
construct a probabilistic flowpipe such that the probability that a k-step
trajectory violates the bounds of the flowpipe does not exceed a
user-specified failure probability threshold. The key ideas in this paper are:
(1) to learn a surrogate predictor model from data, (2) to perform reachability
analysis using the surrogate model, and (3) to quantify the surrogate model's
incurred error using conformal inference in order to give probabilistic
reachability guarantees. We focus on learning-enabled control systems with
complex closed-loop dynamics that are difficult to model symbolically, but
where state transition pairs can be queried, e.g., using a simulator. We
demonstrate the applicability of our method on examples from the domain of
learning-enabled cyber-physical systems.
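Step (3), quantifying the surrogate's error with conformal inference, could look like the following sketch; the calibration scores and coverage level are hypothetical, and `conformal_quantile` is an illustrative helper, not the paper's implementation.

```python
# Sketch: conformal quantile of a surrogate model's prediction error, used to
# inflate a predicted flowpipe into a probabilistic one.
import numpy as np

def conformal_quantile(scores, delta):
    """Empirical quantile giving coverage >= 1 - delta (split conformal)."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - delta)))
    if k > n:
        return np.inf            # too little data for this coverage level
    return np.sort(scores)[k - 1]

rng = np.random.default_rng(1)
# Nonconformity scores: norm of (true state - surrogate prediction) on a
# held-out calibration set of trajectories (simulated here for illustration).
scores = rng.gamma(shape=2.0, scale=0.05, size=500)
radius = conformal_quantile(scores, delta=0.05)

# Inflate each predicted state of the surrogate flowpipe by `radius`; a fresh
# trajectory then stays inside the inflated flowpipe with probability >= 0.95.
predicted_states = rng.normal(size=(10, 2))        # toy surrogate predictions
flowpipe = [(x, radius) for x in predicted_states]  # center + radius per step
print("inflation radius:", radius)
```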
Conformance Testing for Stochastic Cyber-Physical Systems
Conformance is defined as a measure of distance between the behaviors of two
dynamical systems. The notion of conformance can accelerate system design when
models of varying fidelities are available on which analysis and control design
can be done more efficiently. Ultimately, conformance can capture distance
between design models and their real implementations and thus aid in robust
system design. In this paper, we are interested in the conformance of
stochastic dynamical systems. We argue that probabilistic reasoning over the
distribution of distances between model trajectories is a good measure for
stochastic conformance. Additionally, we propose the non-conformance risk to
reason about the risk of stochastic systems not being conformant. We show that
both notions have the desirable transference property, meaning that conformant
systems satisfy similar system specifications, i.e., if the first model
satisfies a desirable specification, the second model will satisfy (nearly) the
same specification. Lastly, we propose how stochastic conformance and the
non-conformance risk can be estimated from data using statistical tools such as
conformal prediction. We present empirical evaluations of our method on an F-16
aircraft, an autonomous vehicle, a spacecraft, and a Dubins vehicle.
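As a rough illustration of the last point, the sketch below estimates a stochastic conformance bound and a CVaR-style non-conformance risk from sampled trajectory distances; the distance distribution and risk level are assumed for illustration.

```python
# Sketch: estimating stochastic conformance from sampled trajectory distances.
# The distance samples are hypothetical stand-ins for paired rollouts.
import numpy as np

rng = np.random.default_rng(2)
# d_i = max_t ||x1_i(t) - x2_i(t)|| for paired rollouts of the two systems.
distances = rng.lognormal(mean=-2.0, sigma=0.5, size=400)

delta = 0.1
n = len(distances)
k = int(np.ceil((n + 1) * (1 - delta)))
bound = np.sort(distances)[k - 1]     # P(distance <= bound) >= 1 - delta
print(f"conformal conformance bound: {bound:.4f}")

# Non-conformance risk via CVaR: the mean of the worst delta-fraction of
# distances, an empirical estimate of the conditional value-at-risk.
tail = np.sort(distances)[int(np.floor(n * (1 - delta))):]
print(f"CVaR_{delta} estimate: {tail.mean():.4f}")
```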
Conformal Prediction for STL Runtime Verification
We are interested in predicting failures of cyber-physical systems during
their operation. Particularly, we consider stochastic systems and signal
temporal logic specifications, and we want to calculate the probability that
the current system trajectory violates the specification. The paper presents
two predictive runtime verification algorithms that predict future system
states from the current observed system trajectory. As these predictions may
not be accurate, we construct prediction regions that quantify prediction
uncertainty by using conformal prediction, a statistical tool for uncertainty
quantification. Our first algorithm directly constructs a prediction region for
the satisfaction measure of the specification so that we can predict
specification violations with a desired confidence. The second algorithm
constructs prediction regions for future system states first, and uses these to
obtain a prediction region for the satisfaction measure. To the best of our
knowledge, these are the first formal guarantees for a predictive runtime
verification algorithm that applies to widely used trajectory predictors such
as RNNs and LSTMs, while being computationally simple and making no assumptions
on the underlying distribution. We present numerical experiments on an F-16
aircraft and a self-driving car.
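A simplified sketch of the first (direct) algorithm, under strong assumptions: a toy random-walk system, a naive constant predictor in place of an RNN/LSTM, and the specification "always |x| <= 1"; all names and constants are illustrative, not the paper's setup.

```python
# Sketch of the "direct" algorithm: conformalize the gap between the robustness
# computed on predicted states and the true robustness.
import numpy as np

def robustness(traj):                  # STL robustness of G(|x| <= 1)
    return np.min(1.0 - np.abs(traj))

def predictor(observed, horizon):      # stand-in for an RNN/LSTM predictor
    return np.full(horizon, observed[-1])

rng = np.random.default_rng(3)

def rollout(T):                        # toy stochastic system (random walk)
    return np.cumsum(rng.normal(0, 0.05, size=T))

t_obs, T = 10, 30
# Calibration: nonconformity score = predicted robustness minus true robustness.
scores = []
for _ in range(200):
    traj = rollout(T)
    pred = np.concatenate([traj[:t_obs], predictor(traj[:t_obs], T - t_obs)])
    scores.append(robustness(pred) - robustness(traj))
n, delta = len(scores), 0.05
q = np.sort(scores)[int(np.ceil((n + 1) * (1 - delta))) - 1]

# Runtime: if predicted robustness exceeds q, the true robustness is positive
# (the spec is satisfied) with probability >= 1 - delta.
traj = rollout(T)
pred = np.concatenate([traj[:t_obs], predictor(traj[:t_obs], T - t_obs)])
print("predict satisfaction:", robustness(pred) - q > 0)
```

The calibration quantile q shifts the predicted robustness downward, so a positive corrected value certifies satisfaction at the chosen confidence regardless of the trajectory distribution.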